As the COVID-19 pandemic puts pressure on healthcare systems worldwide, AI diagnostic systems based on computed tomography (CT) images have become a sustainable solution for early diagnosis. However, their model-wise vulnerability to adversarial perturbations hinders deployment in practical situations. Existing adversarial training strategies are difficult to generalize to the medical imaging field, which is challenged by complex medical texture features. To overcome this challenge, we propose a Contour Attention Preserving (CAP) method based on lung cavity edge extraction. The contour prior features are injected into the attention layer via a parameter regularization, and we optimize the robust empirical risk with a hybrid distance metric. We then introduce a new cross-nation CT scan dataset to evaluate the generalization capability of adversarial robustness under distribution shift. Experimental results indicate that the proposed method achieves state-of-the-art performance on multiple adversarial defense and generalization tasks. The code and dataset are available at https://github.com/Quinn777/CAP.
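As a rough illustration of the idea, the sketch below combines an attention-to-contour regularizer with a TRADES-style KL term standing in for the hybrid robust loss; the function names, the exact loss form, and the weights are our assumptions, not the released CAP code.

```python
# Minimal sketch of a contour-prior attention regularizer (illustrative only).
import torch
import torch.nn.functional as F

def contour_attention_loss(attn_map, contour_mask, logits_clean, logits_adv,
                           labels, lam_reg=0.1, lam_rob=1.0):
    """attn_map: (B, H, W) attention weights; contour_mask: (B, H, W) lung-edge
    prior in [0, 1]; lam_* are hypothetical trade-off weights."""
    # Parameter regularization: pull the attention map toward the contour prior.
    reg = F.mse_loss(attn_map, contour_mask)
    # Robust empirical risk with a hybrid distance: cross-entropy on clean logits
    # plus a KL term between adversarial and clean predictions (TRADES-style).
    ce = F.cross_entropy(logits_clean, labels)
    kl = F.kl_div(F.log_softmax(logits_adv, dim=1),
                  F.softmax(logits_clean, dim=1), reduction="batchmean")
    return ce + lam_rob * kl + lam_reg * reg
```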
Word alignment aims to find translationally equivalent words between source and target sentences. Previous work has demonstrated that self-training can achieve competitive word alignment results. In this paper, we propose to use word alignments generated by a third-party word aligner to supervise neural word alignment training. Specifically, the source word and target word of each word pair aligned by the third-party aligner are trained to be close neighbors of each other in the contextualized embedding space when fine-tuning a pre-trained cross-lingual language model. Experiments on benchmarks of various language pairs show that our approach can, surprisingly, self-correct the third-party supervision by finding more accurate word alignments and deleting wrong ones, leading to better performance than various third-party word aligners, including the current best one. When we integrate the supervision from all the third-party aligners, we achieve state-of-the-art word alignment performance, with alignment error rates that are on average more than two points lower than the best third-party aligner's. We released our code at https://github.com/sdongchuanqi/Third-Party-Supervised-Aligner.
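To illustrate the kind of supervision described above, here is a minimal sketch that pulls each third-party-aligned word pair together in the contextualized embedding space with a symmetric contrastive objective; the loss form and all names are our assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def alignment_supervision_loss(src_emb, tgt_emb, pairs, temperature=0.1):
    """src_emb: (S, d), tgt_emb: (T, d) contextualized token embeddings of one
    sentence pair; pairs: list of (i, j) indices aligned by the third-party
    aligner. Aligned words are positives; all other words are negatives."""
    src = F.normalize(src_emb, dim=-1)
    tgt = F.normalize(tgt_emb, dim=-1)
    sim = src @ tgt.t() / temperature          # (S, T) similarity matrix
    loss = 0.0
    for i, j in pairs:
        # Pull source word i toward its aligned target word j, and vice versa.
        loss = loss + F.cross_entropy(sim[i].unsqueeze(0), torch.tensor([j]))
        loss = loss + F.cross_entropy(sim[:, j].unsqueeze(0), torch.tensor([i]))
    return loss / max(1, 2 * len(pairs))
```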
Convolution-based methods provide good segmentation performance in medical image segmentation tasks. However, these methods face the following challenges when dealing with the edges of medical images: (1) previous convolution-based methods do not focus on the boundary relationship between foreground and background around the segmentation edge, which leads to degraded segmentation performance when the edge varies; (2) the inductive bias of convolutional layers cannot adapt to complex edge variations and the aggregation of multiple segmented regions, so their performance improvements are mostly confined to the bodies of segmented regions rather than the edges. To address these challenges, we propose the CM-MLP framework, built on an MFI (Multi-scale Feature Interaction) block and an ACRE (Axial Context Relation Encoder) block, to segment the edges of medical images precisely. In the MFI block, we propose a cascade multi-scale MLP (Cascade MLP) to process all local information from the deeper layers of the network simultaneously, and use the cascade multi-scale mechanism to fuse discrete local information gradually. The ACRE block is then used to make deep supervision focus on exploring the boundary relationship between foreground and background to refine the edges of medical images. The segmentation accuracy (Dice) of our proposed CM-MLP framework reaches 96.96%, 96.76%, and 82.54% on three benchmark datasets, the CVC-ClinicDB dataset, the sub-Kvasir dataset, and our in-house dataset, respectively, surpassing state-of-the-art methods. The source code and trained models will be available at https://github.com/programmerhyy/cm-mlp.
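A toy sketch of the cascade multi-scale fusion idea follows; the module sizes, names, and fusion rule are illustrative guesses, not the CM-MLP code.

```python
import torch
import torch.nn as nn

class CascadeMLP(nn.Module):
    """Toy cascade of MLPs that progressively fuses multi-scale feature maps
    from deep layers, one scale at a time (our reading of the MFI block)."""
    def __init__(self, channels=64, num_scales=3):
        super().__init__()
        self.mlps = nn.ModuleList(
            nn.Sequential(nn.Linear(channels, channels), nn.GELU(),
                          nn.Linear(channels, channels))
            for _ in range(num_scales))

    def forward(self, feats):
        # feats: list of (B, N, C) token maps, coarse to fine, all resampled
        # to the same token count N beforehand.
        fused = torch.zeros_like(feats[0])
        for mlp, f in zip(self.mlps, feats):
            fused = mlp(fused + f)   # fuse discrete local information stepwise
        return fused
```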
Long-term vertebral fractures severely affect patients' quality of life, causing kyphosis, lumbar deformity, and even paralysis. Computed tomography (CT) is a common clinical examination for early screening of this disease. However, weak radiological appearances and non-specific symptoms lead to a high risk of missed diagnosis. In particular, mild fractures and normal controls are hard to distinguish for both deep learning models and inexperienced doctors. In this paper, we argue that enhancing the weak fracture features to encourage inter-class separability is the key to improving accuracy. Motivated by this, we propose a supervised contrastive learning based model to estimate Genant grades of vertebral fractures from CT scans. As an auxiliary task, supervised contrastive learning narrows the distance between features within the same class while pushing others apart, enhancing the model's ability to capture the subtle features of vertebral fractures. Considering the lack of datasets in this field, we construct a database including 208 samples annotated by experienced radiologists. Our method achieves a specificity of 99% and a sensitivity of 85% in binary classification, and a macro-F1 of 77% in multi-class classification, indicating that contrastive learning significantly improves the accuracy of vertebral fracture screening, especially for mild fractures and normal controls. Our de-identified data and code will be made publicly available for the community.
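The auxiliary objective is standard supervised contrastive learning, which can be sketched as below (a generic SupCon loss in the spirit of Khosla et al.; hyperparameters and implementation details are not the authors').

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """Supervised contrastive loss: pull same-class features together, push
    other classes apart. features: (B, d), labels: (B,) integer class ids."""
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature                      # (B, B) similarities
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    logits = sim.masked_fill(eye, float('-inf'))       # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1).clamp(min=1)
    # average the log-probability over each sample's positive pairs
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts
    return loss.mean()
```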
In recent years, several works have adopted convolutional neural networks (CNNs) to diagnose avascular necrosis of the femoral head (AVNFH) based on X-ray images or magnetic resonance imaging (MRI). However, due to tissue overlap, X-ray images can hardly provide the fine-grained detail needed for early diagnosis. MRI, on the other hand, has a long imaging time and is more expensive, making it impractical for large-scale screening. Computed tomography (CT) shows tissue layer by layer, images faster, and costs less than MRI. However, to the best of our knowledge, there is no prior work on CT-based AVNFH diagnosis. In this work, we collect and label a large-scale dataset for AVNFH grading. In addition, existing end-to-end CNNs only produce classification results, and it is difficult for them to provide more information to diagnosing doctors. To address this issue, we propose the Structure-Regularized Attentive Network (SRANet), which is able to highlight necrotic regions during classification based on patch attention. SRANet extracts features from image patches, obtains weights via an attention mechanism to aggregate the features, and constrains them with a structural regularizer that incorporates prior knowledge to improve generalization. SRANet is evaluated on our AVNFH-CT dataset. Experimental results show that SRANet outperforms CNNs for AVNFH classification; moreover, it can localize lesions and provide more information to assist doctors in diagnosis. Our code is made public at https://github.com/tomas-lilingfeng/sranet.
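The patch-attention aggregation can be sketched roughly as follows; the layer names and dimensions are hypothetical, and the structural regularizer is omitted.

```python
import torch
import torch.nn as nn

class PatchAttentionPool(nn.Module):
    """Attention-weighted aggregation over image patches (a sketch of the
    patch-attention idea, not the authors' implementation)."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, patch_feats):
        # patch_feats: (B, P, d) features of P patches per image
        w = torch.softmax(self.score(patch_feats), dim=1)  # (B, P, 1) weights
        pooled = (w * patch_feats).sum(dim=1)              # (B, d) image feature
        # The per-patch weights double as a map highlighting necrotic regions.
        return pooled, w.squeeze(-1)
```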
Although deep learning algorithms have been intensively developed for computer-aided tuberculosis diagnosis (CTD), they mainly depend on carefully annotated datasets, which leads to much time and resource consumption. Weakly supervised learning (WSL), which leverages coarse-grained labels to accomplish fine-grained tasks, has the potential to solve this problem. In this paper, we first propose a new large-scale tuberculosis (TB) chest X-ray dataset, namely the Tuberculosis chest X-ray Attribute dataset (TBX-Att), and then establish an attribute-assisted weakly-supervised framework to classify and localize TB by leveraging attribute information to overcome the insufficient supervision in WSL scenarios. Specifically, first, the TBX-Att dataset contains 2000 X-ray images with seven kinds of attributes for TB relational reasoning, annotated by experienced radiologists. It also includes the public TBX11K dataset with 11200 X-ray images to facilitate weakly supervised detection. Second, we exploit a multi-scale feature interaction model for TB area classification and detection with attribute relational reasoning. The proposed model is evaluated on the TBX-Att dataset and will serve as a solid baseline for future research. The code and data will be available at https://github.com/gangmingzhao/tb-attribute-weak-localization.
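Since the abstract does not specify the framework's internals, here is only a hedged sketch of what an attribute-assisted head might look like: a shared feature feeds both the TB class head and a seven-way multi-label attribute head, with the head sizes and loss weighting as placeholders.

```python
import torch
import torch.nn as nn

class AttributeAssistedHead(nn.Module):
    """Hypothetical two-head classifier: TB class plus seven TB attributes."""
    def __init__(self, dim=512, num_classes=2, num_attrs=7):
        super().__init__()
        self.cls_head = nn.Linear(dim, num_classes)
        self.attr_head = nn.Linear(dim, num_attrs)

    def forward(self, feat):                    # feat: (B, dim) backbone feature
        return self.cls_head(feat), self.attr_head(feat)

def multitask_loss(cls_logits, attr_logits, y_cls, y_attr, lam=0.5):
    ce = nn.functional.cross_entropy(cls_logits, y_cls)
    # Attributes are multi-label, so use binary cross-entropy per attribute.
    bce = nn.functional.binary_cross_entropy_with_logits(attr_logits, y_attr)
    return ce + lam * bce
```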
This paper describes the speech recognition system of the THUEE team for the IARPA Open Automatic Speech Recognition Challenge (OpenASR21), along with further experimental explorations. We achieve outstanding results under both the Constrained and Constrained-plus training conditions. For the Constrained training condition, we build our basic ASR system on the standard hybrid architecture. To alleviate the out-of-vocabulary (OOV) problem, we extend the pronunciation lexicon using grapheme-to-phoneme (G2P) techniques for both OOV and potential new words. Standard acoustic model structures such as CNN-TDNN-F and CNN-TDNN-F-A are adopted. In addition, multiple data augmentation techniques are applied. For the Constrained-plus training condition, we use the self-supervised learning framework wav2vec2.0. We experiment with various fine-tuning techniques with the Connectionist Temporal Classification (CTC) criterion on top of the publicly available pre-trained XLSR-53 model. We find that the frontend feature extractor plays an important role when applying the wav2vec2.0 pre-trained model to the encoder-decoder based CTC/Attention ASR architecture. Extra improvements can be achieved by using the CTC model fine-tuned on the target language as the frontend feature extractor.
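For a sense of the Constrained-plus setup, here is a minimal hedged sketch of CTC fine-tuning on the public XLSR-53 checkpoint with HuggingFace transformers; the vocabulary size, hyperparameters, stand-in data, and the one-step loop are placeholders, not the team's recipe.

```python
import torch
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-large-xlsr-53",
    vocab_size=40,              # hypothetical target-language grapheme inventory
    ctc_loss_reduction="mean",
)
model.freeze_feature_encoder()  # the convolutional front end is commonly frozen

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
waveform = torch.randn(1, 16000)        # stand-in for one second of 16 kHz audio
labels = torch.randint(1, 40, (1, 12))  # stand-in grapheme label ids
loss = model(input_values=waveform, labels=labels).loss  # CTC loss
loss.backward()
optimizer.step()
```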
Despite great progress in video understanding with deep convolutional neural networks, the feature representations learned by existing methods may be biased toward static visual cues. To address this issue, we propose a novel method based on a probabilistic analysis of self-supervised video representation learning to Suppress Static Visual Cues (SSVC). In our method, video frames are first encoded to obtain latent variables that follow a standard normal distribution via normalizing flows. By modeling static factors in a video as random variables, the conditional distribution of each latent variable becomes a shifted and scaled normal. Then, the latent variables that remain large along time are selected as static cues and suppressed to generate motion-preserved videos. Finally, positive pairs are constructed with the motion-preserved videos for contrastive learning, to alleviate the problem of representation bias toward static cues. The less-biased video representation generalizes better to various downstream tasks. Extensive experiments on publicly available benchmarks demonstrate that the proposed method outperforms the state of the art when only a single RGB modality is used for pre-training.
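The suppression step can be sketched roughly as follows; the selection criterion (temporal mean magnitude) and the suppression ratio are our guesses at the rule the abstract describes, not the paper's exact procedure.

```python
import torch

def suppress_static_cues(z, ratio=0.25):
    """z: (T, d) per-frame latents from a normalizing flow. Dimensions whose
    magnitude stays large across time are treated as static cues and zeroed."""
    score = z.abs().mean(dim=0)             # (d,) temporal mean magnitude
    k = int(ratio * z.size(1))              # number of dims to treat as static
    static_dims = score.topk(k).indices
    z_motion = z.clone()
    z_motion[:, static_dims] = 0.0          # suppress static factors
    return z_motion                         # decode this to a motion-preserved clip
```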
Vision-Language Pre-Training (VLP) has shown promising capabilities to align image and text pairs, facilitating a broad variety of cross-modal learning tasks. However, we observe that VLP models often lack the visual grounding/localization capability, which is critical for many downstream tasks such as visual reasoning. In this work, we propose a novel Position-guided Text Prompt (PTP) paradigm to enhance the visual grounding ability of cross-modal models trained with VLP. Specifically, in the VLP phase, PTP divides the image into $N\times N$ blocks and identifies the objects in each block through the object detector widely used in VLP. It then reformulates the visual grounding task into a fill-in-the-blank problem given a PTP, by encouraging the model to predict the objects in the given blocks or regress the blocks of a given object, e.g. filling ``P'' or ``O'' in a PTP ``The block P has a O''. This mechanism improves the visual grounding capability of VLP models and thus helps them better handle various downstream tasks. By introducing PTP into several state-of-the-art VLP frameworks, we observe consistently significant improvements across representative cross-modal learning architectures and several benchmarks, e.g. zero-shot Flickr30K Retrieval (+4.8 in average recall@1) for the ViLT \cite{vilt} baseline, and COCO Captioning (+5.3 in CIDEr) for the SOTA BLIP \cite{blip} baseline. Moreover, PTP achieves results comparable to object-detector based methods with much faster inference, since PTP discards its object detector at inference time while the latter cannot. Our code and pre-trained weights will be released at \url{https://github.com/sail-sg/ptp}.
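The prompt construction is easy to sketch; the following toy generator reflects our reading of the abstract (masking either the position P or the object O), not the released code.

```python
import random

def make_ptp(objects_in_blocks, mask_token="[MASK]"):
    """objects_in_blocks: dict mapping block index (0 .. N*N-1) -> object name
    detected in that block. Returns (prompt, answer) for one fill-in-the-blank
    example, masking either the position or the object at random."""
    block, obj = random.choice(list(objects_in_blocks.items()))
    if random.random() < 0.5:
        # Mask the position P: the model must regress the block of the object.
        return f"The block {mask_token} has a {obj}", str(block)
    # Mask the object O: the model must predict the object in the given block.
    return f"The block {block} has a {mask_token}", obj

# e.g. make_ptp({4: "dog"}) -> ("The block 4 has a [MASK]", "dog")
```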
With its continuously thriving popularity around the world, fitness activity analysis has become an emerging research topic in computer vision. While a variety of new tasks and algorithms have been proposed recently, there is a growing hunger for data resources with high-quality data, fine-grained labels, and diverse environments. In this paper, we present FLAG3D, a large-scale 3D fitness activity dataset with language instructions, containing 180K sequences of 60 categories. FLAG3D features the following three aspects: 1) accurate and dense 3D human poses captured by an advanced MoCap system to handle complex activities and large movements; 2) detailed and professional language instructions describing how to perform a specific activity; 3) versatile video resources from a high-tech MoCap system, rendering software, and cost-effective smartphones in natural environments. Extensive experiments and in-depth analysis show that FLAG3D offers great research value for various challenges, such as cross-domain human action recognition, dynamic human mesh recovery, and language-guided human action generation. Our dataset and source code will be publicly available at https://andytang15.github.io/FLAG3D.